The Changing Nature of Mass Belief Systems: The Rise of Concept Ideologues & Policy Wonks
In today’s world of intense ideological conflict at the elite level, the nature of mass belief systems has changed dramatically since the last time Converse’s famous levels of conceptualization (Campbell et al., 1960; Converse, 1964) were coded in 2000. This paper shows that the percentage of respondents with well-developed belief systems based on a clear understanding of public policy choices has increased substantially since then. It also introduces a new category termed “policy wonks” to reflect a sub-category that Converse only referred to in passing but which is now quite common. Unlike respondents whom I classify as “concept ideologues” in this paper, policy wonks do not employ overarching concepts such as liberalism/conservatism or the scope of government. Rather, policy wonks simply refer to at least three public policy stands when asked what they like and dislike about the major parties and presidential candidates. Although it was very rare for citizens in the 1950s to show a clear belief system based on the specific choices of government action, today’s highly intense and polarized policy debates have made programmatic-oriented belief systems quite common. A close examination of policy wonks shows that they are just as politically knowledgeable and consistent on issue dimensions as concept ideologues (i.e., those who employ ideological terms). Hence, policy wonks possess a well-defined belief system based on an understanding of public policy, thereby meeting Converse’s criteria for classification at the top level of conceptualization. The substantial increases in both concept ideologues and policy wonks account for virtually all of the increase since the 1980s in respondents whose partisanship matches their ideology (i.e., conservative Republicans and liberal Democrats).
Not only are respondents at the top of the levels of conceptualization more numerous than they used to be, they are also more consistent, which has led to a marked increase in the overall correspondence between partisanship and ideology. On the other hand, the decrease in ideologically inconsistent partisans (i.e., liberal Republicans and conservative Democrats) has occurred across all conceptualization levels. Thus, party polarization is a combination of: 1) better-developed belief systems increasing ideological-partisan consistency; and 2) partisan sorting decreasing the number of partisans who are out of step with their party’s ideological stance.

Past research has shown that Republicans are substantially more likely to be ideologues whereas Democrats are much more inclined to conceptualize politics in terms of group benefits. This pattern was quite evident in the 2008 and 2012 American National Election Study (ANES) responses that I personally coded. However, two developments occurred in 2016 that dramatically reshaped the partisan nature of belief systems. First, the Bernie Sanders wing of the Democratic Party evidenced a great deal of ideological thinking, thereby pushing Democrats to a record percentage at the top level of ideological conceptualization. Second, the voters who supported Trump in the Republican primaries were much less likely to be ideologues or policy wonks than those who supported more traditional Republican candidates. These developments combined to make Democrats and Republicans more similar than ever before in terms of ideological conceptualization in 2016.
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Comment: 17 pages, 13 figures
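The linear-probe methodology described above can be sketched with synthetic data: fit a linear regression from stand-in activation vectors to a depth target and check the goodness of fit. All specifics here (array sizes, noise level, variable names) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for internal LDM activations: n samples, d-dim features.
n, d = 500, 64
activations = rng.normal(size=(n, d))

# Hypothetical depth target, linearly encoded in the activations plus noise --
# the situation a linear probe is designed to detect.
w_true = rng.normal(size=d)
depth = activations @ w_true + 0.1 * rng.normal(size=n)

# The probe itself is just a linear regression over activations.
probe = LinearRegression().fit(activations, depth)
r2 = probe.score(activations, depth)  # high R^2 => linear depth information present
```

A low R^2 under this test would indicate that depth, if represented at all, is not linearly decodable from that layer's activations.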
Emergent Linear Representations in World Models of Self-Supervised Sequence Models
How do sequence models represent their decision-making process? Prior work
suggests that an Othello-playing neural network learned nonlinear models of the
board state (Li et al., 2023). In this work, we provide evidence of a closely
related linear representation of the board. In particular, we show that probing
for "my colour" vs. "opponent's colour" may be a simple yet powerful way to
interpret the model's internal state. This precise understanding of the
internal representations allows us to control the model's behaviour with simple
vector arithmetic. Linear representations enable significant interpretability
progress, which we demonstrate with further exploration of how the world model
is computed.
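The vector-arithmetic control described above can be illustrated in a few lines: given a unit direction recovered by a linear probe, reflecting an activation across the probe hyperplane flips the decoded label while leaving the orthogonal component untouched. The direction and activation here are random stand-ins, not the Othello model's actual weights.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32

# Hypothetical unit probe direction for "my colour" vs. "opponent's colour"
# at one board square, as a linear probe might recover it.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

hidden = rng.normal(size=d)  # stand-in for an internal activation vector

def flip_square(h, v):
    """Reflect h across the hyperplane orthogonal to unit vector v: the
    component along v is negated, everything orthogonal is preserved."""
    return h - 2.0 * (h @ v) * v

edited = flip_square(hidden, direction)
```

Because the edit only touches the probed subspace, the rest of the representation, and hence the model's other beliefs about the board, is left intact.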
Ordered Treemap Layouts (2001)
Treemaps, a space-filling method of visualizing large hierarchical data sets, are receiving increasing attention. Several algorithms have been proposed to create more useful displays by controlling the aspect ratios of the rectangles that make up a treemap. While these algorithms do improve visibility of small items in a single layout, they introduce instability over time in the display of dynamically changing data, and fail to preserve an ordering of the underlying data. This paper introduces the ordered treemap, which addresses these two shortcomings. The ordered treemap algorithm ensures that items near each other in the given order will be near each other in the treemap layout. Using experimental evidence from Monte Carlo trials, we show that compared to other layout algorithms ordered treemaps are more stable while maintaining relatively low aspect ratios of the constituent rectangles. A second test set uses stock market data.
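The order-preservation property at the heart of the abstract can be demonstrated with a minimal ordered, space-filling layout: items are placed in input order, each taking a slice of the remaining rectangle proportional to its weight. This is a simple baseline sketch, not the paper's pivot-based ordered treemap algorithm, but it shares the key invariant that order in the data is preserved in the layout.

```python
def ordered_slice_layout(weights, x, y, w, h):
    """Lay out rectangles for `weights` inside (x, y, w, h), in input order.

    Each item is sliced off the remaining rectangle, splitting along the
    longer side; returns a list of (x, y, width, height) tuples whose areas
    are proportional to the weights.
    """
    rects = []
    remaining = sum(weights)
    for wt in weights:
        frac = wt / remaining
        if w >= h:  # slice off a vertical strip on the left
            sw = w * frac
            rects.append((x, y, sw, h))
            x, w = x + sw, w - sw
        else:       # slice off a horizontal strip on top
            sh = h * frac
            rects.append((x, y, w, sh))
            y, h = y + sh, h - sh
        remaining -= wt
    return rects

rects = ordered_slice_layout([1.0, 1.0, 2.0], 0.0, 0.0, 4.0, 1.0)
```

Unlike squarified layouts, this scheme never reorders items, so adjacent inputs produce adjacent rectangles; the paper's contribution is achieving that while also keeping aspect ratios low and layouts stable over time.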
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
We introduce Inference-Time Intervention (ITI), a technique designed to
enhance the truthfulness of large language models (LLMs). ITI operates by
shifting model activations during inference, following a set of directions
across a limited number of attention heads. This intervention significantly
improves the performance of LLaMA models on the TruthfulQA benchmark. On an
instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from
32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and
demonstrate how to balance it by tuning the intervention strength. ITI is
minimally invasive and computationally inexpensive. Moreover, the technique is
data efficient: while approaches like RLHF require extensive annotations, ITI
locates truthful directions using only a few hundred examples. Our findings
suggest that LLMs may have an internal representation of the likelihood of
something being true, even as they produce falsehoods on the surface.
Comment: code: https://github.com/likenneth/honest_llam
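The core mechanic of shifting activations along chosen directions in a subset of attention heads, as the abstract describes, can be sketched as follows. The head count, selected indices, directions, and the intervention strength `alpha` are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_heads, d_head = 8, 16

# Hypothetical unit "truthful" directions, one per attention head, e.g. as
# fit by linear probes on a labelled truthfulness dataset.
directions = rng.normal(size=(n_heads, d_head))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

selected = [1, 4, 6]  # heads chosen for intervention (illustrative)
alpha = 5.0           # intervention strength: the truthfulness/helpfulness knob

def intervene(head_outputs, directions, selected, alpha):
    """Shift only the selected heads' outputs along their directions,
    leaving all other heads untouched."""
    out = head_outputs.copy()
    for h in selected:
        out[h] += alpha * directions[h]
    return out

head_outputs = rng.normal(size=(n_heads, d_head))  # stand-in activations
shifted = intervene(head_outputs, directions, selected, alpha)
```

Applied at every generated token during inference, a shift of this form adds negligible compute, which is consistent with the abstract's claim that the technique is minimally invasive and inexpensive.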
ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing
Evaluating outputs of large language models (LLMs) is challenging, requiring
making -- and making sense of -- many responses. Yet tools that go beyond basic
prompting tend to require knowledge of programming APIs, focus on narrow
domains, or are closed-source. We present ChainForge, an open-source visual
toolkit for prompt engineering and on-demand hypothesis testing of text
generation LLMs. ChainForge provides a graphical interface for comparison of
responses across models and prompt variations. Our system was designed to
support three tasks: model selection, prompt template design, and hypothesis
testing (e.g., auditing). We released ChainForge early in its development and
iterated on its design with academics and online users. Through in-lab and
interview studies, we find that a range of people could use ChainForge to
investigate hypotheses that matter to them, including in real-world settings.
We identify three modes of prompt engineering and LLM hypothesis testing:
opportunistic exploration, limited evaluation, and iterative refinement.
Comment: 23 pages, 7 figures, in submission